Non-record 1xH100 backoff7gram zlib-fallback sign-of-life (val_bpb 0.9209)#767
Non-record 1xH100 backoff7gram zlib-fallback sign-of-life (val_bpb 0.9209)#767RichiiiTV wants to merge 1 commit intoopenai:mainfrom
Conversation
Community Review — Non-record 1xH100 backoff7gram zlib-fallback sign-of-life (val_bpb 0.9209)Compliance: NEEDS AUTHOR ACTION — What I found: The CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with: A few of the common patterns I've seen for this class of error in the 2026-04-11 sweep:
Recommendation: Could you run Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet because the audit halts at the import step. Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — syntax error at line 1: invalid non-printable character U+FEFF. Classification via |
Retraction — this IMPORT_FAIL was a UTF-8 BOM handling bug in my classifierSorry @RichiiiTV, this one's on me. Your CPU smoke test on CT2038 actually passed — the IMPORT_FAIL I reported above came from a separate classifier step, and it was a bug in the classifier, not in your code. What happened: My classifier does an The smoke runner's Your PR is not broken. Python accepts BOMs; my classifier's ast walk was buggy. I'm retracting the IMPORT_FAIL classification. I'll re-queue the compliance audit now that the BOM-handling bug is identified and post findings separately. Again — sorry for the noise. |
Community Review — Non-record 1xH100 backoff7gram zlib-fallback sign-of-life (val_bpb 0.9209)BPB: 0.9209 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache What I found in the code (head SHA Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=74200 B, SMOKE_TEST_PASS Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline. Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually. Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=74200 B, SMOKE_TEST_PASS. Classification via deterministic AST-based |
This PR adds a non-record 16MB submission for a 1xH100 signs-of-life run of the compacted
#753-style root lane.Key points:
This is not a leaderboard-valid record submission:
I am submitting it as a non-record sign-of-life because it demonstrates how strong the legal score-first
#753-style backoff evaluator remains even when the underlying dense model is far from the intended 8xH100 training regime.